
    Modeling The Time Variability of Accreting Compact Sources

    We present model light curves for accreting Black Hole Candidates (BHC) based on a recently proposed model for their spectro-temporal properties. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft photon injections at random (Poisson) intervals near the compact object and their reprocessing into hard radiation in an extended but non-uniform hot plasma corona surrounding the compact object. We argue that the majority of the timing characteristics of these light curves are due to the stochastic nature of the Comptonization process in the extended corona, whose properties, most notably its radial density dependence, are imprinted in them. We compute the corresponding Power Spectral Densities (PSDs), autocorrelation functions, time skewness of the light curves, and time lags between the light curves of the sources at different photon energies, and compare our results to observation. Our model light curves compare well with observations, providing good fits to their overall morphology, as manifested by the autocorrelation and skewness functions. The lags and PSDs of the model light curves are also in good agreement with those observed (the model can even accommodate the presence of QPOs). Finally, while most of the variability power resides at time scales $\gtrsim$ a few seconds, the model also allows for shots of a few msec in duration, in accordance with observation. We suggest that refinements of this type of model, along with spectral and phase lag information, can be used to probe the structure of this class of high energy sources. Comment: 23 pages LaTeX, 15 encapsulated PostScript figures, to appear in the Astrophysical Journal.
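
    A toy simulation conveys the gist of this picture: soft-photon injections at Poisson-distributed times, each reprocessed into a hard-photon shot whose duration stands in for the spread of Compton scattering times in the corona. The Python sketch below is an illustration only; the exponential shot profile and all parameter values are assumptions, not the paper's radiative-transfer calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: soft-photon injections at Poisson-distributed times,
# each reprocessed into a hard-photon "shot". The one-sided exponential
# profile and every parameter below are illustrative assumptions.
dt = 1e-3                 # time resolution (s)
T = 64.0                  # total duration (s)
n = int(T / dt)
t = np.arange(n) * dt

rate = 5.0                # mean injection rate (shots per second)
n_shots = rng.poisson(rate * T)
t0 = rng.uniform(0, T, n_shots)               # random injection times
# A broad distribution of shot durations stands in for the range of
# scattering times in an extended, non-uniform corona (~3 ms to ~3 s).
tau = 10 ** rng.uniform(-2.5, 0.5, n_shots)

lc = np.zeros(n)
for ti, taui in zip(t0, tau):
    m = t >= ti
    lc[m] += np.exp(-(t[m] - ti) / taui) / taui   # unit-fluence shot

# Power spectral density of the resulting aperiodic light curve.
freq = np.fft.rfftfreq(n, dt)
psd = np.abs(np.fft.rfft(lc - lc.mean())) ** 2
```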

    Sudden jumps and plateaus in the quench dynamics of a Bloch state

    We take a one-dimensional tight-binding chain with periodic boundary conditions, put a particle in an arbitrary Bloch state, and then quench it by suddenly changing the potential of an arbitrary site. In the ensuing time evolution, the probability density of the wave function at an arbitrary site \emph{jumps indefinitely between plateaus}. This phenomenon complements an earlier one found in the same scenario, in which the survival probability of the particle in the initial Bloch state shows \emph{cusps} periodically [Zhang J. M. and Yang H.-T., EPL, \textbf{114} (2016) 60001]. The plateaus support the scattering-wave picture of the quench dynamics of the Bloch state. Underlying the cusps and jumps is the exactly solvable, nonanalytic dynamics of a Luttinger-like model, based on which the locations of the jumps and the heights of the plateaus are accurately predicted. Comment: final version.
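
    The quench protocol is simple enough to reproduce numerically. The sketch below is an independent illustration (the system size, momentum, monitored site, and defect strength are arbitrary choices, not the paper's): it prepares a Bloch state on a tight-binding ring, switches on a single-site potential, and tracks the probability density at one site, where the plateau-and-jump structure shows up.

```python
import numpy as np

N = 201                  # number of sites (periodic boundary condition)
J = 1.0                  # hopping amplitude
U = 2.0                  # quenched on-site potential at site 0
k = 2 * np.pi * 50 / N   # momentum of the initial Bloch state

# Post-quench Hamiltonian: a hopping ring plus a single defect site.
H = np.zeros((N, N))
for j in range(N):
    H[j, (j + 1) % N] = H[(j + 1) % N, j] = -J
H[0, 0] = U

# Initial Bloch state |k>, an eigenstate of the pre-quench (U = 0) ring.
psi0 = np.exp(1j * k * np.arange(N)) / np.sqrt(N)

# Exact time evolution via the eigendecomposition of H.
E, V = np.linalg.eigh(H)
c = V.conj().T @ psi0
site = 10                # an arbitrary site to monitor
times = np.linspace(0, 200, 2000)
prob = [np.abs((V * np.exp(-1j * E * t)) @ c)[site] ** 2 for t in times]
# Plotting prob against times reveals the plateau-and-jump structure.
```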

    3D UAV Trajectory and Communication Design for Simultaneous Uplink and Downlink Transmission

    In this paper, we investigate unmanned aerial vehicle (UAV)-aided simultaneous uplink and downlink transmission networks, in which one UAV acting as a disseminator is connected to multiple access points (APs), while another UAV acting as a base station (BS) collects data from numerous sensor nodes (SNs). The goal of this paper is to maximize the system throughput by jointly optimizing the 3D UAV trajectories, communication scheduling, and UAV-AP/SN transmit power. We first consider a special case in which the UAV-BS and UAV-AP trajectories are pre-determined. Although the resulting problem is an integer, non-convex optimization problem, a globally optimal solution is obtained by applying the polyblock outer approximation (POA) method, which exploits the problem's hidden monotonic structure. Subsequently, for the general case including the 3D UAV trajectory optimization, an efficient iterative algorithm is proposed to alternately optimize the divided sub-problems based on the successive convex approximation (SCA) technique. Numerical results demonstrate that the proposed design achieves significant system throughput gains over the benchmarks. In addition, the SCA-based method achieves nearly the same performance as the POA-based method at much lower computational complexity.
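
    The SCA idea can be seen on a one-dimensional toy problem. The sketch below is not the paper's algorithm: it maximizes a simple non-concave rate difference by repeatedly linearizing the subtracted concave term around the current iterate, the same convexify-and-iterate pattern that SCA applies to the trajectory and power sub-problems. All constants are made up for illustration.

```python
import numpy as np

# Toy power-control problem: maximize the non-concave rate difference
#     f(p) = log2(1 + a*p) - log2(1 + b*p),   0 <= p <= P_max.
a, b, P_max = 4.0, 1.0, 10.0

def f(p):
    return np.log2(1 + a * p) - np.log2(1 + b * p)

p = 0.1                                   # feasible starting point
for _ in range(20):
    # Linearize the subtracted concave term at the current iterate. The
    # resulting surrogate is concave, lower-bounds f, and is tight at p,
    # so each iteration can only improve the true objective.
    grad = b / ((1 + b * p) * np.log(2))
    base = np.log2(1 + b * p)

    def surrogate(q):
        return np.log2(1 + a * q) - (base + grad * (q - p))

    # Maximize the surrogate on [0, P_max]; a grid keeps the sketch short.
    grid = np.linspace(0, P_max, 10001)
    p = grid[np.argmax(surrogate(grid))]

print(f"SCA iterate p = {p:.3f}, objective f(p) = {f(p):.4f}")
```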

    Probing the Structure of Accreting Compact Sources Through X-Ray Time Lags and Spectra

    By compiling all the data sets we could acquire, we show that the Fourier-frequency-dependent hard X-ray lags, first observed in the analysis of the aperiodic variability of the light curves of the black hole candidate Cygnus X-1, appear to be a property shared by several other accreting black hole candidate sources, and also by the different spectral states of this source. We then present both analytic and numerical models of these time lags resulting from the process of Comptonization in a variety of hot electron configurations. We argue that, under the assumption that the observed spectra are due to Comptonization, the dependence of the lags on the Fourier period provides a means for mapping the spatial density profile of the hot electron plasma, while the period at which the lags eventually level off provides an estimate of the size of the scattering cloud. We further examine the influence of the location and spatial extent of the soft photon source on the form of the resulting lags for a variety of configurations; we conclude that the study of the hard X-ray lags can also provide clues about these parameters of the Comptonization process. Fits of the existing data with our models indicate that the Comptonizing clouds are quite large in extent ($\sim$1 light second), with inferred radial density profiles that are in many instances inconsistent with those of the standard dynamical models, while the source of soft photons appears to be smaller in extent than the hot electron cloud by roughly two orders of magnitude, and its location is consistent with the center of the hot electron corona. Comment: 20 pages LaTeX, 11 PostScript figures, to appear in the Astrophysical Journal, Vol. 512, Feb 20 issue.
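
    The lag measurement itself is a standard cross-spectral computation. As an illustration only (the synthetic light curves and the delay kernel below are assumptions, not the paper's data or pipeline), this sketch forms the cross spectrum of a soft-band and a hard-band light curve and converts its phase into a time lag at each Fourier frequency.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 2 ** 16
soft = rng.standard_normal(n)             # stand-in soft-band light curve

# Hard photons emerge after extra scatterings: model the hard band as the
# soft band passed through a broad, causal kernel of delays (an assumed
# stand-in for the corona's response, spanning many time scales).
delays = np.arange(1, 200)
kernel = 1.0 / delays
kernel /= kernel.sum()
hard = np.convolve(soft, kernel)[:n]

# Cross spectrum: its phase gives the lag of the hard band relative to
# the soft band at each Fourier frequency.
freq = np.fft.rfftfreq(n, dt)[1:]         # drop the zero-frequency bin
S = np.fft.rfft(soft)[1:]
H = np.fft.rfft(hard)[1:]
phase_lag = np.angle(np.conj(S) * H)      # radians
time_lag = phase_lag / (2 * np.pi * freq) # seconds, per frequency bin
```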

    The Complete Two-Loop Integrated Jet Thrust Distribution In Soft-Collinear Effective Theory

    In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e- annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small-r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that, given appropriate L-loop contributions to the integrated hemisphere soft function, one can straightforwardly predict a number of potentially large logarithmic contributions at L loops not controlled by the factorization theorem for jet thrust. Comment: 52 pages, 5 figures; in v2: incorporated referee suggestions in the text, including additional figures and footnotes for the purpose of clarification. v2 is the version published in PRD.
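
    Functional relations among polylogarithms are what allow such results to be written compactly. As a minimal illustration with classical polylogarithms only (a textbook identity, not a relation taken from the paper), the sketch below numerically verifies Euler's reflection formula for the dilogarithm.

```python
import mpmath as mp

# Euler's reflection identity for the dilogarithm:
#     Li2(z) + Li2(1 - z) = pi^2/6 - log(z) * log(1 - z)
mp.mp.dps = 30                       # 30 digits of working precision

z = mp.mpf("0.3")
lhs = mp.polylog(2, z) + mp.polylog(2, 1 - z)
rhs = mp.pi ** 2 / 6 - mp.log(z) * mp.log(1 - z)
print(lhs - rhs)                     # ~1e-30: the relation checks out
```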

    Ranked List Loss for Deep Metric Learning

    The objective of deep metric learning (DML) is to learn embeddings that capture semantic similarity and dissimilarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence due to a large proportion of trivial pairs or triplets as the model improves. To address this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them. They converge faster and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure. Consequently, some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is iteratively used as a query, and the remaining ones compose the gallery to search, i.e., the support set in the few-shot setting. These remaining examples are split into a positive set and a negative set. For every mini-batch, the learning objective of the ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space. As a result, the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve the useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal in comparison with state-of-the-art methods. Comment: Accepted to T-PAMI; to read the official version, please go to IEEE Xplore. Fine-grained image retrieval task. Our source code is available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DML
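
    The per-query structure described above is easy to write down. The sketch below is a simplified reading of the loss, not the authors' released implementation (see their repository for that): every example in the mini-batch serves once as the query, gallery positives are pulled inside a boundary alpha - margin, and gallery negatives are pushed beyond alpha; the uniform weighting over violators and the hyperparameter values are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

def ranked_list_loss(embeddings, labels, alpha=1.2, margin=0.4):
    """Simplified ranked-list-style loss over one mini-batch."""
    embeddings = F.normalize(embeddings, dim=1)

    # Pairwise Euclidean distances, clamped before the square root so
    # the gradient is defined on the (unused) diagonal.
    sq = (embeddings ** 2).sum(1)
    d2 = sq.unsqueeze(0) + sq.unsqueeze(1) - 2 * embeddings @ embeddings.t()
    dist = d2.clamp(min=1e-12).sqrt()

    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = (same & ~eye).float()       # gallery positives per query (row)
    neg = (~same).float()             # gallery negatives per query (row)

    # Hinge terms: violating positives lie outside (alpha - margin),
    # violating negatives lie inside alpha.
    pos_loss = F.relu(dist - (alpha - margin)) * pos
    neg_loss = F.relu(alpha - dist) * neg

    # Average over gallery examples per query, then over queries.
    pos_term = pos_loss.sum(1) / pos.sum(1).clamp(min=1)
    neg_term = neg_loss.sum(1) / neg.sum(1).clamp(min=1)
    return (pos_term + neg_term).mean()

# Toy usage: a batch of 8 embeddings from 2 classes.
emb = torch.randn(8, 16, requires_grad=True)
lab = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = ranked_list_loss(emb, lab)
loss.backward()
```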